ai doomer
The Download: China's dying EV batteries, and why AI doomers are doubling down
China figured out how to sell EVs. Now it has to bury their batteries. In the past decade, China has seen an EV boom, thanks in part to government support. Buying an electric car has gone from a novel decision to a routine one; by late 2025, nearly 60% of new cars sold were electric or plug-in hybrids. But as the batteries in China's first wave of EVs reach the end of their useful life, early owners are starting to retire their cars, and the country is now under pressure to figure out what to do with those aging components. The issue is putting strain on China's still-developing battery recycling industry and has given rise to a gray market that often cuts corners on safety and environmental standards.
- Asia > China (1.00)
- Europe > United Kingdom (0.15)
- North America > United States > Massachusetts (0.05)
- Transportation > Passenger (0.91)
- Transportation > Ground > Road (0.91)
- Transportation > Electric Vehicle (0.91)
The AI doomers feel undeterred
But they certainly wish people were still taking their warnings seriously. It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.
- North America > United States > Massachusetts (0.04)
- North America > United States > California (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
The AI Doomers Are Getting Doomier
Nate Soares doesn't set aside money for his 401(k). "I just don't expect the world to be around," he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which "everything is fully automated," he told me. That is, "if we're around."
- North America > United States > New York (0.05)
- North America > United States > California (0.04)
- Government (1.00)
- Media (0.69)
- Information Technology (0.69)
AI Doomers Had Their Big Moment
Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline. The deep-learning revolution was drawing new converts to the cause.
- North America > United States > New Mexico (0.04)
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
A Letter Prompted Talk of AI Doomsday. Many Who Signed Weren't Actually AI Doomers
This March, nearly 35,000 AI researchers, technologists, entrepreneurs, and concerned citizens signed an open letter from the nonprofit Future of Life Institute that called for a "pause" on AI development, due to the risks to humanity revealed in the capabilities of programs such as ChatGPT. "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves ... Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?" I could still be proven wrong, but almost six months later and with AI development faster than ever, civilization hasn't crumbled. Heck, Bing Chat, Microsoft's "revolutionary," ChatGPT-infused search oracle, hasn't even displaced Google as the leader in search. So what should we make of the letter and similar sci-fi warnings backed by worthy names about the risks posed by AI? Two enterprising students at MIT, Isabella Struckman and Sofie Kupiec, reached out to the first hundred signatories of the letter calling for a pause on AI development to learn more about their motivations and concerns.
What Really Made Geoffrey Hinton Into an AI Doomer
Geoffrey Hinton, perhaps the most important person in the recent history of artificial intelligence, recently sent me a video of Snoop Dogg. In the clip of a discussion panel, the rapper expresses profane amazement at how artificial intelligence software, such as ChatGPT, can now hold a coherent and meaningful conversation. "Then I heard the old dude that created AI saying, 'This is not safe, 'cause the AIs got their own mind and these motherfuckers gonna start doing their own shit,'" Snoop says. "And I'm like, 'Is we in a fucking movie right now or what?'" The "old dude" is, of course, Hinton.
Are You an AI Doomer? We're all gonna die and other AI…
Someone recently recommended I watch an interview with Eliezer Yudkowsky conducted by YouTubers and all-round crypto smart guys David and Ryan from "Bankless", a crypto and blockchain education company. Here's the link if you have a spare two hours; it's an equally scary and fascinating watch: I used to obsessively watch Bankless videos back in 2021, during the last crypto/NFT boom, but since the bear market of 2022 set in, I've kind of lost some of my mojo for crypto. Anyhow, this video was a dramatic departure from the Bankless crew's regular weekly roundup of the crypto markets, where they get deep into the weeds of the latest developments in the space.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.70)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.69)